Uniform {Ch,S(Ch)}-Factorizations of Kn−I for Even h

Authors

Abstract

Let H be a connected subgraph of a graph G. An H-factor of G is a spanning subgraph of G whose components are isomorphic to H. Given a set H of mutually non-isomorphic graphs, a uniform H-factorization of G is a partition of the edge set of G into H-factors for some H ∈ H. In this article, we give a complete solution to the existence problem of uniform H-factorizations of Kn − I (the graph obtained by removing a 1-factor I from the complete graph Kn) for H = {Ch, S(Ch)}, where Ch is the cycle of length h for an even integer h ≥ 4 and S(Ch) is the graph consisting of a cycle Ch with a pendant edge attached to each vertex.
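
To make the definitions concrete, here is a minimal sketch in plain Python (the helper names are mine, not the paper's). It builds the edge list of S(Ch) and records the counting constraints that follow directly from the definitions: every Ch-factor or S(Ch)-factor of Kn − I uses exactly n edges, so any uniform factorization consists of (n − 2)/2 factors, and a Ch-factor (resp. S(Ch)-factor) can exist only if h (resp. 2h) divides n.

```python
# A minimal sketch (plain Python; helper names are mine, not the paper's)
# of the objects in the abstract: S(C_h) is the cycle C_h with one pendant
# edge attached to each cycle vertex, together with the elementary counting
# constraints any uniform {C_h, S(C_h)}-factorization of K_n - I must meet.

def sun_graph(h):
    """Edge list of S(C_h): cycle vertices 0..h-1, pendant vertex h+i at i."""
    cycle = [(i, (i + 1) % h) for i in range(h)]
    pendants = [(i, h + i) for i in range(h)]
    return cycle + pendants

def basic_counts(n, h):
    """Counts implied by the definitions (n even; h even, h >= 4)."""
    edges = n * (n - 2) // 2          # K_n - I is (n-2)-regular
    edges_per_factor = n              # n/h cycles of h edges each, or
                                      # n/(2h) copies of S(C_h), 2h edges each
    num_factors = edges // edges_per_factor          # = (n - 2) / 2
    ch_ok = n % h == 0                # a C_h-factor needs h | n
    sun_ok = n % (2 * h) == 0         # S(C_h) has 2h vertices, so 2h | n
    return num_factors, ch_ok, sun_ok

print(len(sun_graph(4)))      # 8 edges: S(C_4) has 8 vertices and 8 edges
print(basic_counts(16, 4))    # (7, True, True)
```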


Similar articles

Embedding Factorizations for 3-Uniform Hypergraphs II: r-Factorizations into s-Factorizations

Motivated by a 40-year-old problem due to Peter Cameron on extending partial parallelisms, we provide necessary and sufficient conditions under which one can extend an r-factorization of the complete 3-uniform hypergraph on m vertices, K^3_m, to an s-factorization of K^3_n. This generalizes an existing result of Baranyai and Brouwer, who proved it for the case r = s = 1.
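
The paper's conditions hold for general r and s; as a concrete illustration of the definitions only (not of the paper's construction), the smallest transparent 1-factorization of a complete 3-uniform hypergraph is that of K^3_6: pairing each triple with its complementary triple partitions the 20 triples into 10 parallel classes. A minimal sketch in plain Python:

```python
# Not the paper's construction -- just the smallest transparent example of
# the objects involved: a 1-factorization of the complete 3-uniform
# hypergraph K^3_6.  Pairing each of the 20 triples with its complementary
# triple yields 10 parallel classes (one-factors) partitioning all triples.

from itertools import combinations

V = frozenset(range(6))
triples = {frozenset(t) for t in combinations(V, 3)}

factors = []
while triples:
    t = next(iter(triples))
    comp = V - t                       # the complementary triple
    factors.append((sorted(t), sorted(comp)))
    triples -= {t, comp}

assert len(factors) == 10              # 20 triples, 2 per parallel class
assert all(set(a) | set(b) == set(V)   # each class covers every vertex once
           for a, b in factors)
print(factors[0])
```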


Sequentially Perfect and Uniform One-Factorizations of the Complete Graph

In this paper, we consider a weakening of the definitions of uniform and perfect one-factorizations of the complete graph. Basically, we want to order the 2n − 1 one-factors of a one-factorization of the complete graph K2n in such a way that the union of any two (cyclically) consecutive one-factors is always isomorphic to the same two-regular graph. This property is termed sequentially uniform;...
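
Although the paper studies which orderings exist in general, the property itself is easy to exhibit: the classical circle-method one-factorization of K2n is invariant under rotation, so in the natural ordering the union of any two cyclically consecutive one-factors has the same cycle type (and two 2-regular graphs with equal cycle types are isomorphic). A minimal sketch in plain Python, with helper names of my own:

```python
# Circle-method one-factorization of K_{2n}: vertices are "inf" and
# 0..2n-2; factor F_i matches inf with i and i+k with i-k (mod 2n-1).
# Rotation by 1 maps F_i onto F_{i+1}, so all cyclically consecutive
# unions F_i + F_{i+1} are isomorphic -- verified below via cycle types.

def circle_factorization(n):
    m = 2 * n - 1
    return [[("inf", i)] + [((i + k) % m, (i - k) % m) for k in range(1, n)]
            for i in range(m)]

def cycle_type(edges):
    """Sorted cycle lengths of a 2-regular graph given as an edge list."""
    adj = {}
    for u, v in edges:
        adj.setdefault(u, []).append(v)
        adj.setdefault(v, []).append(u)
    seen, lengths = set(), []
    for start in adj:
        if start in seen:
            continue
        prev, cur, length = None, start, 0
        while True:
            seen.add(cur)
            length += 1
            a, b = adj[cur]
            prev, cur = cur, (b if a == prev else a)
            if cur == start:
                break
        lengths.append(length)
    return sorted(lengths)

F = circle_factorization(6)                      # K_12, 11 one-factors
types = [cycle_type(F[i] + F[(i + 1) % len(F)]) for i in range(len(F))]
assert all(t == types[0] for t in types)         # a sequentially uniform order
print(types[0])
```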


Some Properties of Even Moments of Uniform Random Walks

We build upon previous work on the densities of uniform random walks in higher dimensions, exploring some properties of the even moments of these densities and extending a result about their modularity.


Even Faster Accelerated Coordinate Descent Using Non-Uniform Sampling

Accelerated coordinate descent is widely used in optimization due to its cheap per-iteration cost and scalability to large-scale problems. Up to a primal-dual transformation, it is also the same as accelerated stochastic gradient descent that is one of the central methods used in machine learning. In this paper, we improve the best known running time of accelerated coordinate descent by a facto...



Journal

Journal title: Mathematics

Year: 2023

ISSN: 2227-7390

DOI: https://doi.org/10.3390/math11163479